May 16, 2023
What are focused research organizations? Which kinds of research projects lend themselves to the FRO model? Researchers in academia frequently complain about the incentive structures around funding and publishing; so how do FROs change those dynamics? Why must FROs be time-limited, especially if they're successful? Who's in charge in an FRO? How does "field-building" help to improve science? What effects might large language models have on science?
Adam Marblestone is the CEO of Convergent Research. He's been launching Focused Research Organizations (FROs) such as E11 Bio and Cultivarium. He also serves on the boards of several non-profits pursuing new methods of funding and organizing scientific research, including Norn Group and New Science. Previously, he was a Schmidt Futures Innovation Fellow, a consultant for the Astera Institute, a Fellow with the Federation of American Scientists (FAS), a research scientist at Google DeepMind, Chief Strategy Officer of the brain-computer interface company Kernel, a research scientist at MIT, a PhD student in biophysics with George Church and colleagues at Harvard, and a theoretical physics student at Yale. He also previously helped to start companies like BioBright and advised foundations such as the Open Philanthropy Project. His work has been recognized with a Technology Review 35 Innovators Under 35 Award (2018), a Fannie and John Hertz Foundation Fellowship (2010), and a Goldwater Scholarship (2008). Learn more about him at adammarblestone.org.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Adam Marblestone about focused research organizations, field building in science, and beneficial applications of AI.
SPENCER: Adam, welcome.
ADAM: Thanks. Great to be here.
SPENCER: I'm really excited to have you on because I think you have some of the most innovative ideas about how we improve science and do science in new ways. So I want to jump into this conversation starting with focused research organizations (FROs). Can you tell us about where this idea came from? What is the idea? And why are you excited about it?
ADAM: Well, this is an idea that I've been working on full time for the past couple years. I'm trying to build the kind of incubation structure and market-making structure to create more of these types of projects that we're calling focused research organizations. In many ways, the idea is really obvious. And if you're not a scientist, it might be surprising that something like this doesn't exist. But a focused research organization is basically a special-purpose nonprofit organization that has a very fixed and specific mission: to create some kind of tool, system, dataset, or other advancement that benefits the scientific process. It's organized a little bit like a startup. Finite duration is one key feature: it has a mission of maybe five or seven years. If you complete that mission, then it's not that the work has to entirely stop. It could spin off companies. It could create longer-lived nonprofits. But the FRO itself is a finite-duration effort, as opposed to a permanent institute. So it's a sort of startup-like team built to make something for science.
SPENCER: Now, for those that aren't as familiar with how science operates today, could you just contrast this with the normal operating procedure?
ADAM: Yeah. I guess what's interesting is, you could say that, "Of course, there are multiple different kinds of scientific institutions that exist." But maybe post-1960s, we've standardized pretty heavily the funding structures around a certain model of project, which is that a principal investigator, usually a professor, will apply for a grant to do a certain body of research in their lab in a university, where the grant is sort of enough to hire some graduate students or maybe some postdoctoral researchers to do a certain body of work and publish papers. Some key features of that model: it's in some fixed institution (usually a university or maybe within another institute); it's given either to a single professor or to a group of professors that will then loosely collaborate; and it exists within a structure where professors are training students and postdocs to become scientists, who then participate in that same grant-making structure. There's nothing in science that is otherwise exactly analogous to the way that we form startups to go after a particular market or product development goal.
SPENCER: So I guess the way that I understand that process — please correct me if I'm wrong — is that instead of individual researchers applying for grants and kind of building up a lab with their PhD students, there's going to be some usually much larger amount of money allocated for (let's say) five years to try to solve some specific scientific problem, and that might involve 20 scientists all working together. Is that accurate?
ADAM: Right, exactly. And if you think about the university traineeship, apprenticeship, or mentorship model, one feature of that model is that often each of the graduate students is doing their own thesis. Each postdoc is trying to get maybe a first-authorship paper on a particular subject that's distinct from what all the other postdocs are doing. So it's kind of driven by novelty, rather than by working backwards from a functional purpose in the way that an industrial effort works backwards, where they say, "Which engineers do we have to hire? Which project managers do we have to hire? How many people do we really need to be on this team?" It's more driven by the individual careers of the individual researchers, organizing around publishing papers. So as a result, the team size and structure and workflow are customized less to the engineering needs of the project, and more to the background structure in which scientists are publishing and training.
SPENCER: Could you give us a few examples or references to make it really concrete?
ADAM: Yeah. So just as an example of two that exist already. There's one called E11 Bio that we helped to get started, which is focused on a faster, cheaper, better technology for mapping circuits in the brain, starting with the mouse brain. And that's a group that's based in Alameda, California. They have a combination of scientists and people who are more like project managers or technical support. They're trying to integrate a number of different biochemical, biological, hardware, automation, and computational technologies all into one system, which will be a faster, cheaper, better way of mapping circuits in the brain by painting different neurons in the brain with effectively different kinds of color codes or molecular codes. So that's just one project that reached a level of complexity where it didn't seem to parcel very well into these individual papers or theses. It felt more like an engineering team needed to be put together, beyond a certain scale, to deal with all the complexity of building that brain mapping system.
SPENCER: So it almost sounds like a mini-Manhattan Project, where you're kind of saying, “Okay, we all have this common objective. We're going to bring together all these great minds and we're going to just focus on it and get it done.” Do you think that's a fair analogy?
ADAM: Yeah, it's a very good analogy, very goal driven. And of course, not all science will flourish under that model; not all science is pursuing a particular goal. Trying to coordinate a larger team isn't always the way to get creativity or new inventions. But for some problems where you're trying to build something — like building this brain mapping system that has all these parts that depend on each other and have to work together — you need this kind of monolithic focus.
SPENCER: So how would the concrete objectives of the FRO be mapped out? What is it really trying to achieve? Are there certain benchmarks in terms of resolution or accuracy or something like that?
ADAM: Well, this team spent the better part of a year, really, and then more once the FRO actually got started, trying to map out all of that: How many different proteins do you need to label the neurons? How do you get the neurons to express those proteins? How are you going to test all of those things? What kinds of microscopes and chemistry do you need? So they spent a lot of time breaking it down into particular objectives and subparts or dimensions of performance, as well as dimensions that measure parts of the system working together. So, kind of halfway through the project (about two to two and a half years in), they're going to be judged against those intermediate milestones. Later on, it's ultimately measured by some notion of a kind of catalytic impact on the world. But before you get there, there are going to be metrics like, "What size of chunk of brain can you actually measure?" And, "Can you actually see the circuitry?" It's quite an ambitious set of goals, but it's definitely been parsed and laid out into quite a detailed roadmap of steps.
SPENCER: So what's the second example?
ADAM: The second example in that first batch we helped set up is called Cultivarium; that's based in Boston. It is kind of a biology-heavy engineering team, where the premise of the project is that biological researchers, and people in more industrial biotechnology, only use a very small number of sort of workhorse microorganisms. Whether they're trying to produce chemicals, or trying to go in and study biological mechanisms — like trying to find the next CRISPR technology or the next really great molecular tool for biology — they use a relatively small number of organisms. E. coli is a famous one, and yeast is another really common one. And the reason for that, at some level, is that if I just take another organism — say, something that's growing in my sink — finding all the different recipes that you need is hard. How do I grow that thing? What does it need to eat? How do I put DNA into that thing? What can I do to that cell that will cause the DNA to go across the cell membrane and into the organism? What pieces of DNA would I need to put in there so that they would actually propagate, so that I could use them to engineer the organism? They're very complex recipes, and most of those recipes just aren't known. And if you're a university researcher, typically trying to get a grant to study some scientific subject about that organism or to do something specific, it's hard to spend all the time booting up and inventing your own recipes for a new organism that hasn't really been studied before. So what Cultivarium is trying to do is make it faster and easier to actually find the recipe. It's making robotics and systems that will try a lot of different conditions and rapidly hone in on the recipes you need to do the most basic things in research with these organisms, which then could be built upon by other scientists. Once the basic recipes are there, they can actually use them for studies that go in a particular applied direction.
SPENCER: Now with that example, I wonder, why not just do it as a startup? What about that problem makes it good for an FRO rather than a startup structure?
ADAM: Yeah. Well, one question is how you want to actually disseminate the information. So I think there certainly are parts that could work as startups. Historically, for other parts of microbe engineering — like Ginkgo Bioworks, for example, which is now quite a big company that originally was funded by DARPA and supported in a more research-y way — they're working mostly on this smaller set of organisms that I mentioned, doing genetic engineering sort of as a service, or as a basis for forming new ventures. So there certainly is such a thing in industrial biotechnology that you can do as a service, but a lot of the question is where the value gets captured and who you want to have access to this thing. If I just want to put my recipe book out in a way that's accessible to all scientists, that's a pretty hefty capital investment just to release something where you're basically making a public good — the knowledge of how to do these recipes for dealing with these organisms. The other question is whether the organism itself has the ultimate economic value, or whether it's some product that you're making with that organism. What Cultivarium is really focused on is the recipes for dealing with the organism. They're not focused on only one application, because they're trying to be very broadly catalytic to the field. So there are certainly startups doing adjacent types of things. But the hope with the FRO model is that we can put in a really sustained amount of engineering and capital in a way that benefits science most broadly. And there seem to be some real tradeoffs there, in terms of how it would be constrained if you were trying to get venture capital money as the primary way of doing this.
SPENCER: The two examples you gave are both biology, but my understanding is this idea applies much more broadly to many types of science. Can you talk about that? How would this apply outside of biology?
ADAM: It just comes back to the question of what looks like a deep tech company and what looks like an FRO? I think you can ask that question outside biology. One example I like to think about is fusion energy. Historically, big international projects or things being done in the DOE national labs were maybe the main way of doing fusion research; recently, venture capitalists have started putting billions of dollars into companies that are making fusion reactors, which is really kind of amazing, surprising, and exciting. However, there are still gaps, even in fusion energy, that have come to our attention as we've been looking for FROs — cases where there's a capital-intensive and engineering-intensive piece of work that needs to be done, that would normally require VC backing and is really important for the field, but doesn't necessarily generate the kind of VC returns, or is not primarily oriented toward the market in the same way. An example of that in fusion would be sources of neutrons that would be used to characterize the materials that you need to build the reactors out of. So it's one thing to demonstrate, "Hey, here I can actually get a self-sustaining fusion reaction that's on the path to what a company needs to do to build out a particular genre of fusion reactor." But down the line, as you actually scale these things up and make them big, there are underlying materials science questions, because fusion reactions tend to blast off lots of neutrons that might basically melt the walls of your reactor. So what if I want to have fusion-like neutrons, and I want to test and understand the materials science of reactor walls? That's something where I need to build a neutron source that looks like what comes out of fusion, and then use that basically for a scientific purpose. So this idea of a fusion prototypic neutron source is one that came up very quickly for us, in thinking about how these types of ideas apply to fusion. And again, it's not that national labs or government science are unaware of the existence of these types of projects. But the question is: is there an agile mechanism to spin up a new one and get funding for it, and what's the kind of market for these in a philanthropic sense? If I just wanted to go and start a fusion prototypic neutron source project, I would have a very hard time doing that normally. So, that's one example in the energy space. In climate more broadly, just as in biology, a lot of the questions have to do with generating a better dataset or a faster, cheaper, better measurement technique. We see the same issue emerging in a bunch of climate-related problems. Just as an example, it turns out that simply measuring the amounts of different greenhouse gasses being emitted from some object — let's say, an agricultural field that might have cows on it or plants in it — is a really tough, engineering-intensive technical problem that isn't really solved, and that needs a kind of startup-like team. But if my business model is just to measure how much methane the cows or the field are producing, that may not be a VC-backable deep tech company. And we're finding a number of examples like that in the climate space.
SPENCER: So would it be fair to say that when there's a scientific project that can be done kind of piecemeal by individual researchers, through writing papers and maybe small collaborations, then a lot of times it can be handled by the existing scientific establishment, especially if they can get rewarded by being published in top journals? On the other hand, if there's a scientific project where you can capture a lot of the value, then it can often be done in the startup world — you can get venture capital investment and build a company out of it. But then there's sort of this gap where the need is not satisfied by either of those. Is that where FROs come in?
ADAM: Yeah, something like that. Something where you need the capital structure and team structure of a deep tech startup, but you're really trying to make something that looks more like a public good. Or maybe there ultimately will be pieces of capturable value, but there's also a huge amount you want to give out to the world, or you need a really deep back-and-forth with the scientific community, discovering what the applications are, before you could have a really convincing case for this in a purely market-driven way. So ultimately, maybe there'll be billion-dollar companies or something that spin out, but it would be shooting yourself in the foot to try to guarantee that and capture all of that from the beginning.
SPENCER: Can you give us a sense of the scale of these organizations? How many millions of dollars are we talking about here?
ADAM: Yeah. It actually depends on how you think of it, but they're not that big. They're often a 20 to 30 person kind of startup-like team. So that's more like maybe a Series A level deep tech startup. It's not a giant Manhattan Project level effort or anything near that. And that's actually part of why we think it's exciting to generate lots of ideas for these things and try to create almost a market for them to exist. With a billion-dollar-scale project, almost the entire field really has to have a consensus that it's a good idea to do. Whereas for these, maybe you can take bets that are more the size of a DARPA program, which is often tens to hundreds of millions of dollars. And here, we're looking at low tens of millions of dollars per project, which is a lot — and it's something very different from what scientists could get a grant to do in any normal circumstance — but it's also not some kind of giant mega project that becomes the dominant force in a field or something like that. It's really this relatively small group of people.
SPENCER: And you expect the money to mainly come from private philanthropy, like high net worth individuals?
ADAM: Initially, yeah. We're trying to pursue this along a few different dimensions. So far, the main way we have been doing this is by bringing these types of projects to the attention of philanthropists. Ultimately, my hope is that we surface a bunch of examples of ideas like this to people that want to do them. And then maybe government agencies also take up the mantle of diversifying the types of structures they use to fund and organize research. It's actually not a very far cry, for example, from what ARPA agencies can already do in terms of the scale, size, and deliberateness of the project. It just requires this additional step of forming a totally new team that just focuses on one thing for five or six years. But you could imagine ARPA, DARPA type agencies — including the various new ones, like ARPA-H in the US, ARIA in the UK, and a few more privately funded ones — creating branches that are more directed towards focused research organizations. Or maybe there are other variants even. The broader question we're trying to get at here is creating a conversation, essentially: are we in too much of an almost monotheistic world, where there's only one way of organizing research? What if we had a much more polytheistic, structurally diversified approach to funding and organizing research?
SPENCER: When I think about projects that I know that pre-existed FROs and seem FRO-like, I think of something like the Large Hadron Collider, where it seems like that required tons of different scientists coming together and all working towards a common goal. But I imagine the funding structure and how that came together was very different. I don't know how much you know about that project, but how would you compare it to what you're doing?
ADAM: Yeah. I don't know much about the specific history of that. But even if you think about CERN — the European center for nuclear research — it's sort of a postwar, giant, pan-Europe phenomenon. Again, there's lots of bottom-up, distributed, single-professor-few-graduate-students kind of research, and then there's a few big mega projects. And those are maybe once-a-decade kinds of things, at most, in any given field. It's kind of like: Europe is only going to make one giant particle accelerator, and that's going to be CERN. Or the US is going to make one giant gravitational wave observatory, or Hubble Space Telescope, or the Human Genome Project. And definitely, these have some similar features in the way that they're goal-directed, they're trying to be catalytic, they're trying to make a tool or system in a professionalized, deliberate effort. But the question is really: Is there a market for it? Is there a general mechanism? If I'm someone who has a good idea, would it take me my entire career, with very low probability of success, to try to convince the rest of the scientists that they should lobby the government to make the next Large Hadron Collider? Or can I just go and send a proposal to philanthropists, or to the organizations that are trying to catalyze this, and — maybe within a year or two even, if it's a really important idea that scientists want — be able to spin up something like an FRO on that? So a lot of it is about how general the mechanism is, who has access to proposing these things, and how quickly it can be done.
SPENCER: The thing I think about with regard to FROs is the incentive structure. Because a lot of people complain about the incentive structure in academia, just because there's so much pressure to get published in top journals. And if you don't get enough publications in top journals, you're unlikely to be able to stay in the field. You're unlikely to get tenure track jobs and eventually get tenure. So what do you think about how FROs shift incentives relative to academia?
ADAM: Yeah, that's a good question. I think we're early in understanding this, because it's also going to depend on the scale all of these things get to, and the extent to which the sort of meta-science community, broadly (or whatever we're calling this), succeeds in diversifying the different options. Because the question is always, "What happens afterwards?" Whether I go to a new institute or an FRO, what if I decide I don't like it anymore? FROs, of course, sunset in some capacity after five or six years. If I haven't, in that intermediate period of time, published or done the other usual things that people want me to do, am I out of a job? And our take on this is that it definitely depends on the team and the problem and the situation. It's very hard to get new structures to grow because people are always going to be worried, "What if I have to go back into the existing structure?" But there are definitely a few things that mitigate against that kind of risk. One is that it has become much more possible — and I think FROs will facilitate this — for scientists to do entrepreneurship. Again, FROs are mostly focused on creating public goods, but you're also developing a very professional, organized team of people, so to the degree that there are particular verticals that actually require VC backing or a for-profit approach to go after and reach scale, the FRO team is, after the FRO, very well positioned to move into a new industry and create spin-off companies, or get acquired, or fold into the industry in a new area. That somewhat constrains the types of topics one can do this on — maybe there's some obscure area of geology where there just isn't any potential to create a new industry based on some new science or technology. Maybe in that case, the geologists are going to have to find something else to do, and that might be difficult. I also think the traditional notion — that the only path is each person focusing on their own totally distinct publication throughout all the different phases of their career, the sort of academic approach — is a bit of a simplification in general. There are just many different paths: one can go into industry, one can potentially go to another FRO or to another institute, or start something new, as long as one has the skills and the network and the access. So in some ways, it's a cultural shift. You join the FRO, and that opens up many, many wider spaces outside of academia per se. It is also possible that some people who leave FROs will actually go back into academia — as long as the FRO recognizes that's the career path that person wants to take; maybe that person does want to do a first-author paper or something like that. But then the engineers or other people doing something else might structure their work differently. So you can also be very deliberate about this by designing the personnel strategy for a given FRO, as well as the roadmap or transition plan of what happens after. As long as it's designed deliberately with those things in mind, I think this can be an on-ramp to a variety of different paths.
[promo]
SPENCER: You mentioned a few times that the FRO is time-limited; it shuts itself down after a certain period. People might wonder why that is. If it's working well and producing scientific knowledge, why not keep extending it? Why not make it into another Center for Advanced Studies, or something like that, that just keeps working? So, what are your thoughts there?
ADAM: Well, these things could happen in some cases, actually — an FRO could seed a new institute or something like that, or could fold into an existing institute. I think those are definitely possible things that could happen. But again, the question is: do we have a general mechanism for solving these types of gaps, these sort of FRO-shaped gaps? And I think that if every time you do it, you're actually requiring that you endow a new institute or do something that's permanent, then first of all, that creates some significant multiple on the actual cost of the project for the funder. But also, it can kind of dilute the mission in some sense. If the goal is to build some tool, system, dataset, or capability, then once you're done, ideally, you're done. Otherwise, I think you risk contaminating what is meant to be a project-based effort, focused on impact of a certain kind, with a very, very clear thesis about what you are building. And it ends up becoming more about who's directing this institute, and how you get tenure at this institute, and how it keeps getting grants. So we're putting this in as a forcing function for the sense of urgency in the FRO and the sense of clarity of purpose. But I do think that there are ways, depending on the case. If you're deliberate enough, maybe an FRO could have some transitional period where it gets contracts. Let's say we have a brain mapping project, and it actually succeeds in making faster, cheaper, better brain mapping. Maybe, initially, it's not necessarily going to be a whole new institute; it could be a kind of fee-for-service brain mapping endeavor. But if it then becomes really the place to go for brain data, maybe you can find a path to creating a permanent institute. That wouldn't be a bad thing. But in other cases, maybe the FRO will just do a huge amount of non-recurring engineering or capital-intensive work — maybe to make a piece of open source software or something like that, at an FRO-scale burn rate. But then after that, people go off to different tech companies or different places where their skills are needed. And the open source software just gets maintained in a much smaller foundation or something like that, with a much lower burn rate and a more traditional, more grassroots approach to open source. That could just happen without there needing to be a permanent institute that continues to burn at the rate that an FRO would.
SPENCER: Since FROs are some cross between a scientific lab and a company — even though they have a not-for-profit purpose, there's a company-like element to them, like managing this larger team — how do you think about their structure? Who's in charge, and what are the different roles?
ADAM: It's a good question. I would say it's startup-inspired, but also meant to be, in some ways, a bit more flexible. You don't necessarily have to imbibe everything that startups do. And that also comes with some of the roles: running the project (again, these are nonprofits) is not the same as ownership. It is not necessarily tied to financial ownership. It is not necessarily tied to control in the exact same way. Being the CEO of an FRO is not exactly the same as being the CEO, COO, and majority owner or something like that of a startup. There are a lot of things that we're trying to take from the culture of startups, like how project management works, and how very deliberate and structured meetings are. It's a little bit less "it's a startup," and more that startups, given their structure and given their constraints, have evolved a lot of really useful techniques: how startups go about hiring a team in a pretty fast and pretty precise way; how they structure meetings and project management and people management in that type of structure. So we want to borrow a lot of things from that. But it doesn't need to have the aspects that have to do with control or ownership, necessarily. And of course, there are many different kinds of legal configurations that could make for an FRO.
SPENCER: Do you draw a distinction between who's in charge of sort of operating the organization versus who's in charge of the science? Or would that be one person who's in charge of both the science and the operations?
ADAM: So far, we still have a kind of top-level leader role — a sort of CEO or director role — in each project, although I think in some cases that may be less final. And that does borrow a little bit from startups. But it is more science-driven. So I think you can have versions that are led primarily by a scientist, and then there's a COO who's helping that scientist and their team execute on the vision. In terms of ultimate control, it kind of depends on which part of it you're actually talking about. But so far, in each of the existing ones, we do have a notion of a sort of CEO-like role that's ultimately responsible for reporting back to the funders and ultimately responsible for outcomes. And in different cases, those people are more like scientists, or have more of an operational background, or a mixture. And then there's also a significant aspect where we're trying to support and coach people who are primarily scientists on the more operational side. So it doesn't necessarily have to be Elon Musk or something running every FRO. It's meant to facilitate an entrepreneurial-minded scientist actually being able to do the job.
SPENCER: So let's transition topics a little bit, but it is still related. Let's talk about field building in science. How do we make science better through kind of building fields?
ADAM: I guess my general orientation right now to these topics — FROs being just one of them — is around this question: what are the factors, other than just having a really good idea, other than the kind of scientific truth, that drive scientific success or progress? And you could imagine that the notion of field building, or how a field gets built, would be kind of unnecessary. Maybe you have a model of science in which it's very naturally, perfectly self-correcting. Maybe you have a really good hypothesis, and then someone does an experiment, and it's false or it's true. If it's true, then it will get published, and people will build on that. So there's one model where, basically, if you have breakthroughs, then you're going to get fields created based on those breakthroughs, and it's just very natural and organic. So to me, the field building question — whether you need something, whether you want meta-science infrastructure in the world, things that support more field building — has to do with: what are the non-scientific factors (if you want) that would cause that natural process of growth and finding truth and finding new ideas to fail or get stalled for some reason? Are there sociological factors that actually would block progress? And then what field building would, in some sense, do is actually try to counteract those factors. Or maybe there's just some stochasticity to it: if you have a really good idea or hypothesis, maybe there's some probability that it will blossom into a new field, but it won't happen every time. Can we increase the probability that that happens? The other thing to say about this is that it's not really clear what a field even is, or why a field needs to exist, or a bunch of questions like that. Michael Nielsen and other people have been writing and thinking about some of these questions. But this is much less of a formalized experiment than the FROs are. I think people are just starting to think about what a field even is. And I haven't studied this in a rigorous, scientific kind of way. Anything I have to say about field building is more just thoughts or observations from having tried to be involved in various fields and having read a lot on the object level; there's not really a formalized notion of what field building is.
SPENCER: So could you give us a couple of examples where you felt like the field stagnated, where something went wrong? And then, we can use those to think about how to do field building better.
ADAM: Yeah. I tend to think about this most easily by thinking about fields that have been, or are, somehow on the fringe. But then you can say, "Well, is there any fundamental reason why they should be on the fringe?" It's a little bit hard to separate out your emotional response to something that's on the fringe and then say, "Well, why? Is that inherently fringy? Or could we imagine a slightly different parallel path of the universe where that just seems like a really normal and obvious thing to do?" An example that comes to mind is cryonics — the freezing and unfreezing of big hunks of biological tissue into vitreous ice, so you can kind of put something in suspended animation but without damaging the cells. There was some work in the 50s and 60s on trying to do that. But then ultimately, the cryobiologists sort of said, "Look, this is not what we're going for. This is not legitimate science. There's some set of people in the field that seem like they're trying to fight death, or maybe they're making overly large claims about it. Maybe they're trying to start selling cryonics as a product prematurely." And so as a result, the more academic cryobiologists say, "Well, not only is this ethically bad to do, but it's probably just impossible. There's not a firm scientific basis for it." So I guess what I'm trying to say is, you can have dynamics within fields where every field is going to start out very vulnerable; it's not going to have a perfect scientific basis at the beginning. And when something gets started, there are these two paths. One is that the scientific evidence base can gradually grow and refine over time. But something that can stop that is if, let's say, someone tries to prematurely commercialize something, or prematurely make big claims about it, and then the rest of the field says, "No, this is all kind of full of it." But that then prevents the mainstream scientists from working on it in the first place. So someone comes back a couple of decades later, looks at it, and says, "Well, there are very few papers out there, very few validated results." But that's not because there couldn't have been validated results. It's because the scientific community itself rejected it for sociological reasons and stopped working on it. The result is that at any given point in time, it actually looks like it has a pretty weak evidence base. I think cryonics is an example of that. And you could also say, "Well, wouldn't the natural thing be: we have a huge shortage of organs for transplantation, so why don't we just have vitrified organs, kind of in a big bank of organs?" That we don't is also a consequence of this failure of the cryonics field. So maybe if it had started as a subfield purely trying to look at organ transplantation, and no one ever talked about it as cryonics or as suspended animation for humans, it would have been able to take off in a totally different way. I think that's certainly a dynamic that exists across many, many fields that might have some core of scientific possibility, but have ended up being associated with something that's on the fringe or ethically questionable — even though there are actually good, ethical, legitimate uses that could have been discovered but weren't, because they were rejected and the field failed to take off.
SPENCER: Yeah. What are other examples where that's happened?
ADAM: I think you could say that it might have happened in a number of fields. One would be this idea of ocean iron fertilization. And again, I'm not trying to advocate that everyone should do cryonics, or that the world should adopt ocean iron fertilization. In fact, there's a huge amount of scientific work and questions about whether those things would be a good idea. But the idea is getting micro-algae to grow more in the ocean, by dumping nutrients like iron in there, to try to draw down carbon from the atmosphere. This is another case where there are a lot of open scientific questions, a lot of questions about how and whether you should do that. But as a research field, it seems like a good idea to at least be able to ask those questions in a controlled setting. In that case, though, there was this guy named Russ George who went out and basically just started trying to do it, in a way that didn't have buy-in from the rest of the community. And there were a number of companies that tried to start actually selling ocean iron fertilization for carbon credits, in a kind of mini boom of carbon capture — a decade or two before the work on carbon capture that we're seeing now. That may have poisoned the field, in the sense that the scientists who would have needed to be involved to make a legitimate, validated version of it as a set of research questions couldn't, because Russ George had, in part, poisoned the reputation of the field at that early stage. What are other things that might be like that? Maybe you could even argue that some of the early work on AI safety had some of that character. Now, big AI companies have safety teams and things like that, but how much of the sociology was driving that taking longer to become widely adopted?
SPENCER: So what do we do in situations where, just for sociological reasons, some branch of science gets a bad reputation or has a bad vibe around it, and is therefore viewed as not respectable? And where, therefore, good scientists don't want to do it, and maybe they disparage it?
ADAM: It's a hard thing to do. I see these as features of the discourse — maybe you could say it also comes from an overly centralized approach to funding — but ideally, scientists and the people who fund science would be able to separate a field from an association with something that could be problematic. Maybe selling carbon credits before ocean iron fertilization works is, in fact, a very bad thing to do, but that need not pollute the core research question: "What actually happens if you put iron in the ocean in different conditions?" These are things I'm not sure we have a great solution to. One hope that I have, I guess, is that there are approaches to science that are more about just generating data and are less hypothesis-based. It's not that we're doing ocean iron fertilization because our hypothesis is that it's going to work. But what if we just had a map of the ecology of the ocean? Or ultimately, maybe we could simulate the ecology of the ocean or something like that. This is a little bit speculative, but if you just have a purely data-driven approach with less of an agenda — if we just really understood the physics of ice crystal formation in biological samples (that's not cryonics) — we would have better foundations overall, without having to go all in on a particular hypothesis or agenda that becomes very politicized. On the other hand, some fields that you might have imagined would seem very speculative or questionable in some way — like quantum computing or cryptocurrencies — may be surprising success stories. Quantum computing is a field where we haven't seen a working quantum computer doing things that are very useful for humanity, and yet thousands and thousands of physicists and billions of dollars of funding are coordinating around this very speculative, futuristic idea. Maybe that is because, at the very core of that field, there's a much more crystalline definition of what success looks like. It has strong theoretical foundations; you can write down mathematically what you are actually going for. And so even though it's uncertain whether we'll be able to do it, I think progress can be faster or more robust in these more theoretically driven fields — the same with something like cryptocurrency, where there's a kind of algorithm at the core. I don't have the solution. I just think it's something worth thinking about more, as a kind of systematic bottleneck to progress in the sciences. It's one of these sociological factors that can hold back areas of investigation, even though there might be a very legitimate way to pursue that investigation.
SPENCER: I've heard that with Alzheimer's treatment, there were a number of top people who all had a particular theory of how to treat Alzheimer's, and they ended up taking up a lot of the funding, leaving not that much for other approaches. But now we're seeing the dominant approach not perform well, and some have been claiming — I'm not an expert in this by any means — that this might have set the whole field back something like a decade. Do you know about that case in particular?
ADAM: I haven't studied this in a huge amount of detail, but this is one case that I do think speaks to the kind of macro-claims I stand behind in the biological sciences. One is that diversifying the structural ways of doing the research, or the structures for funding the research, is important. The other is having technology for getting to very fundamental measurements and datasets in a hypothesis-free way. If we could just measure how all the proteins are changing in an Alzheimer's brain, maybe we would have seen these multiple types of routes to think about Alzheimer's, whereas the amyloid beta crowd is a kind of looking-under-the-streetlight phenomenon. If we just had much more light in many more places, through better tooling, that could help with that type of problem in biology in the future. But that does seem to be something that has happened.
SPENCER: So, what are some fields that you think should be built or would be promising to be built, that haven't been built yet?
ADAM: One thing, just in general: maybe it sounded from what I was saying like I think fields are bad. I think fields are very good and very necessary, and there's a reason that fields need to exist. Particularly in areas that are less grounded in theory and need to be more empirical, it's not enough for there to be just a few researchers creating ideas — even in computational fields like AI and machine learning. Unless you have a certain number of researchers working on something, unless you have a certain critical mass of resources, journals, peer review processes, standards, benchmarks, and all that, it's very hard to judge progress. And a person from one field may not be able to really judge progress in another field. So you do need the emergence of fields for those reasons, as well as for training. Within biology, there are areas that have just a few groups working on them. Maybe you could say, "Well, if they just had enough of a breakthrough, then other researchers would start coming in." And maybe that's true, but maybe you need to get them to critical mass in order to even validate whether there is a breakthrough there. And so, more active field building will be needed. Some examples that are coming across my desk: one is the role of quantum effects in biology — quantum spins. We know that chemistry is really important, obviously, in how biological organisms work. But what about quantum effects that are not necessarily about which molecules are forming, but about things like the spins of those molecules, the spins of the electrons in those molecules? Another one that I think is really interesting, that we're thinking about, is the role of platform technologies or FRO-like projects in helping to understand the role of the immune system in brain disorders. Rebecca Brock is someone who's been really involved in developing and promoting this idea. What if psychiatric disorders are actually more like autoimmune disorders? That's a possibility. But there's not really a critical mass of researchers pursuing that type of hypothesis.
SPENCER: One thing about this topic that I find a little unsettling is that it's not clear to me whose role it is to build a field. You can imagine individual researchers who are excited about something trying to convince their colleagues to work on it. But who's supposed to build a field? Is it the job of the government? Is it universities or individual researchers?
ADAM: Great question. Yeah, exactly. This is very much one of those examples where I think there are great things about the untethered nature of the sort of Vannevar Bush research ecosystem, which just gives individual researchers grants to do the things they want to do. But maybe it doesn't have enough structural roles, enough supporting roles. If you think about a movie production, there are the actors, but there are also producers, and set designers, and so on. And maybe those roles need to be more front and center. So there certainly are funding models — and I would particularly cite the DARPA, ARPA type model, which is an empowered program manager model. That's typically a program manager with a strong technical background: they usually have a PhD in the relevant technical field, they're maybe 5, 10, or 15 years out of their PhD, sometimes longer, and they have a very deep network and knowledge. But they're not going to go and try to publish their own papers. They're actually going to go up one level and very actively manage a program: deciding what the milestones are, and pulling and rewarding other people to go in that direction. So I think the DARPA model is potentially usable as a way of pulling researchers in a certain direction. What if we had a DARPA program in any one of these areas? That would potentially be a way of creating or bootstrapping the fields. But there may also be other types of field building that are more like public relations support, or helping society understand the value of a certain area of research. You certainly do see this with philanthropists. There are disease-focused foundations that are trying to build the field around a particular disease — maybe huge numbers of taxpayers don't suffer from it, but a particular interest group of patients starts to try to develop a field around it. And then there's also a notion that Tom Kalil has started talking about of a field strategist: someone who's not necessarily running a DARPA-type program or controlling huge amounts of funding initially, but who tries to write down what the bottlenecks in a field actually are — the systemic problems and questions and obstacles that researchers would face as individuals if they tried to pursue the field — and make a roadmap for how philanthropists or anyone could help the field progress.
[promo]
SPENCER: So for our final topic before we wrap up, I wanted to discuss with you the idea of beneficial AI and, in particular, large language models, because there's just so much talk about them right now with things like GPT-3 and ChatGPT. And so, why don't we jump into that? What's your take on what we're seeing right now with large language models?
ADAM: I'm excited about it. And I don't know what's going to happen next. I have a pretty wide, wide range of outcomes kind of floating in my mind, having not fully been able to really grok all the implications. Of course, from my vantage point, a decent amount of what I think about is how that is likely to affect scientific research. Or are there scientific research projects or problems that we could pursue that would make more use of that? I do have this feeling that the implications could be quite broad. And I guess I have a few different framings of how to think about that.
SPENCER: So to start, for people who are less familiar with the idea: when we think about large language models, we're really thinking about systems that have been trained on huge amounts of text in order to predict what text comes next. So you take most of the internet, tens of thousands of books, and build a giant neural net that says, "Given that I've seen a certain sequence of tokens or words, what word is likely to come next?" And then you kind of repeat that, and you can generate any text you want. But then what they do is apply additional methods on top of that, to get them to behave in more controlled ways. For example, once they've had that basic training, the system can be given training data saying, "Well, when given this query, sometimes the neural net will generate this, and sometimes it'll generate that. Which would a human actually prefer?" And then they'll have a human rate it, and they'll be able to fine-tune it that way, so it gives outputs that are more and more like what a human would want. And so today with ChatGPT, we're seeing these systems do things like write poetry, write Python code, and answer all kinds of questions — not always accurately; sometimes they make up answers — and you can imagine training these on scientific papers as well, and having them answer scientific questions. You can imagine them maybe even generating scientific hypotheses. So let's start with this kind of science question. What are some ideas you see in terms of how these might affect science?
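To make the two training stages Spencer describes concrete, here is a minimal toy sketch in PyTorch. This is an illustration only, not any lab's actual training code: the tiny GRU model, the vocabulary size, and the random tensors standing in for a text corpus and for human preference labels are all assumptions made for the example. Stage 1 is next-token prediction; stage 2 trains a separate reward model from human preference comparisons, which is then used to steer the language model.

```python
import torch
import torch.nn as nn

VOCAB, DIM = 1000, 64

class TinyLM(nn.Module):
    """A deliberately tiny language model: embed tokens, run a GRU,
    and predict a distribution over the next token at each position."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):              # tokens: (batch, seq)
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)            # logits: (batch, seq, vocab)

lm = TinyLM()
opt = torch.optim.Adam(lm.parameters(), lr=1e-3)

# Stage 1: next-token prediction. The target at each position is simply
# the token that actually came next in the training text.
text = torch.randint(0, VOCAB, (8, 33))     # random stand-in for a real corpus
inputs, targets = text[:, :-1], text[:, 1:]
loss = nn.functional.cross_entropy(
    lm(inputs).reshape(-1, VOCAB), targets.reshape(-1))
loss.backward()
opt.step()

# Stage 2: learning from human ratings. A reward model is trained so the
# completion a human preferred scores higher than the one they rejected.
class TinyRewardModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.score = nn.Linear(DIM, 1)

    def forward(self, tokens):              # one scalar score per sequence
        return self.score(self.embed(tokens).mean(dim=1)).squeeze(-1)

rm = TinyRewardModel()
preferred = torch.randint(0, VOCAB, (8, 32))   # human-preferred completions
rejected = torch.randint(0, VOCAB, (8, 32))    # dispreferred completions
pref_loss = -nn.functional.logsigmoid(rm(preferred) - rm(rejected)).mean()
pref_loss.backward()
```

In real systems the model is a large transformer and stage 2 is followed by reinforcement learning against the reward model, but the shape of the two training signals is as sketched here.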
ADAM: In thinking about that, I guess my general frame is... let's imagine large language models and the class of systems that they're in. Imagine they can become multimodal, even beyond text — the general property being: what if we get really good at this kind of large-data prediction problem broadly? Then I think you can go through the types of things that scientists do and try to classify: what types of skills are those, what types of cognitive activities are involved? And then, is there something other than the raw cognitive activity that could also be a bottleneck? So, one type of skill — maybe we call these something like Type I skills — are skills that exist in some kind of closed world, where you have some metric of success. If I play Go, and I win the game of Go, I can get a signal just from the Go engine (or whatever you want) that says, "Yes, this is correct, you got the answer." And then the question is, "Can I steer the system to actually do that repeatedly?" Those are the types of systems where, from an AI perspective, you can train the system to become superhuman, often from self-play type dynamics or reinforcement learning, where you have a very dense, ever-present reinforcement signal. So maybe one question is: are there things in science that actually have this structure? Or are there parts of science that you could convert into a mode that looks more like that — cases where you have a simulator, or an evaluator that says, "Yes, you did it right, let me generate a reward"? That's the most extreme case (we can talk about it a little bit more) where I think AI is likely to have a lot of boost potential. But then there are others too: any area — this is now analogous to fine-tuning a large language model on some domain — where there's just a lot of human-generated data. Maybe it's not enough that you have one person who can do a cognitive task that's involved in science; it would take too much of that person's time to generate the training data to fine-tune the language model to mimic that behavior. But maybe there are cases where there are 50 or 100 people — like, if you think about AI and art, there are hundreds of people who can generate very impressive, photorealistic paintings, and then the model can be trained on that. So maybe those are Type II skills, where there's ample demonstration data. Not everyone in the world is great at writing scientific papers, but there are enough people out there who are good that you can learn from that in this unsupervised learning way. And then maybe there's some other type — Type III skills — where there isn't any kind of demonstration data. I'm not sure exactly what that would be, but maybe some people think there's some type of scientific insight like that. There's no demonstration data for the person who solves quantum gravity, because no one's ever solved quantum gravity before. So maybe solving quantum gravity is, in some sense, a Type III skill: there's no demonstration data on which you can fine-tune a large language model. But on the other hand, if you break down what this quantum gravity researcher is doing, they're doing a set of things, maybe, for which there is demonstration data.
So that's one very general frame I've been using. From there we can ask, "Which scientific problems fall into which of those categories?" And then, suppose you had all of that working really well. Does that mean science just goes 100 times faster tomorrow, as soon as we have LLMs on our desktops that can take any Type II skill (anything for which there's ample demonstration data) and let me run it instantly on my laptop, without any pre-existing skill, just by prompting? I, as an individual, could then do anything that at least 50 people have left demonstration data for. That would be pretty transformative. But is it transformative enough to totally change science, or are there other bottlenecks in science?
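[As a rough sketch of what the "Type II" recipe looks like in code, assuming the Hugging Face transformers library: take a pretrained model and fine-tune it on human demonstrations. The "gpt2" checkpoint and the two-line demonstration "dataset" are placeholders; a real effort would need far more data from the 50 to 100 skilled people Adam mentions.]

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Placeholder stand-ins for text written by skilled human demonstrators.
demonstrations = [
    "Abstract: We measure the effect of ...",
    "Methods: Samples were prepared by ...",
]

model.train()
for epoch in range(3):
    for text in demonstrations:
        batch = tokenizer(text, return_tensors="pt")
        # Causal-LM loss: predict each token from the tokens before it.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()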
SPENCER: Right. So maybe we can start with: what are some scientific problems that you think might be more amenable to these kinds of automated techniques?
ADAM: One that I'm pretty excited about right now (and we're actually thinking about whether FRO-type mechanisms or other kinds of engineering projects could boost it) is math, and this idea of automated theorem proving. A lot of what mathematicians do is not formalized. The proof that someone writes down is usually not in some fixed language that's mechanically checkable by other mathematicians. It's something that they, as humans, read through and judge to make sense, but it's not written in any single formalized, programming-like language where you can start from a set of axioms and actually verify, by computation, that the axioms really do yield the proof. And of course, a lot of what mathematicians do is not even the proof itself, but the intuition behind it, the discussions, the visualizations, and so on. Once they already know how they're thinking about something, they might try to formalize it in the form of a proof. But there has definitely been progress: more and more areas of mathematics are becoming formalized, in the sense that they can be expressed in programming languages, basically. One such language is Lean, an interactive theorem prover. Lean is a language in which you can formally express mathematical proofs, and once you've expressed enough building blocks, then as you write this code, it can say, "Check: yes, that actually is a correct proof; that actually does complete the proof of what you were trying to write down." So you interact with the theorem prover, and it can tell you, "Yes, that is, in fact, a valid proof." If you imagine a setting like that, generating mathematical proofs in a formalized language that can express a lot of math (even if that's not mostly how human mathematicians work), it starts to look closer and closer to a Type I skill in the sense I was just describing: if the interactive theorem prover says, "Yes, you just proved what you wanted to prove," that's a reward signal that could be used to train an AI system. Not to mention that if people build libraries of proofs, then with large language models or those types of techniques, the system could also predict the most likely next thing a skilled human would try when generating a proof. So it has elements of the Type I idea, in that it's verifiable within a fixed environment that can very quickly generate a reward signal, and it also has elements of Type II: if you build a larger and larger corpus of proofs in a given language, an LLM can predict the next action one should try. I think that's an exciting prospect. If we could get as much of math as possible written in formalized languages, that would be really great fodder for AI. And then you can ask how far that goes. What is math? Can we treat software engineering as formal methods applied to software? Can you write a program and ask, "Does this program in fact do what I think it's supposed to do?" Isn't that basically equivalent to a proof, verifiable software? Can I verify that my operating system is actually secure, provably secure against different kinds of attacks? Can I verify that my AI system is actually a safe AI system? So this area of automated theorem proving seems like it could be really fertile as a growth area for AI, using LLMs and other types of methods. I think automated theorem proving may be an area that's not yet recognized (some people are certainly catching on) for how significant it could be, both for fundamental math and for things like quickly and easily generating super-secure software.
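[For readers who haven't seen an interactive theorem prover, here is roughly what machine-checked proofs look like in Lean 4; the statements were chosen for illustration. The property Adam is pointing at is that Lean accepts a file only if every proof actually checks, which is exactly the kind of dense pass/fail signal that could serve as a training reward.]

-- Verified by pure computation: Lean reduces both sides and compares.
example : 2 + 2 = 4 := rfl

-- Reusing a lemma from the library completes the proof.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- In tactic mode (`by ...`), a prover, human or machine, proposes one
-- step at a time, and Lean checks each step immediately.
theorem one_pos (n : Nat) : 0 < n + 1 := by
  exact Nat.succ_pos n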
SPENCER: It's interesting to me, because theorem proving has been around for a long time. But it seems like a big bottleneck in theorem proving is: how do you generate the idea for the next step in the proof? Do you just search the space of all possible next steps? That's a total combinatorial explosion, and it's intractable. But if you bring in large language models, and they have enough training data from (let's say) mathematicians, maybe they can actually generate a set of reasonable next steps, the sort of next steps a mathematician might generate. And that cuts through the combinatorial explosion and lets you actually pick the right next steps that get you to a proof.
ADAM: Right. I don't know if this is going to be true, but that's one of the places where I look at large language models and say, "What if that could apply to theorems as well?" And it doesn't always have to be correct. The AI system doesn't have to be perfect at reasoning or anything like that; it just has to reduce the search space, and then the verifier tells it whether it actually got it right. So that's not to say that AI has to do everything that mathematicians do. It's to ask: can you get automated theorem proving to be pretty efficient? Would it bootstrap beyond a certain level? Could it build its own corpus of training data, since it can verify its own proofs? I think formal math is pretty intriguing.
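[Here is a toy Python sketch of the proposer-plus-verifier loop just described. A string-rewriting puzzle stands in for a proof calculus, a cheap length heuristic stands in for the learned model that ranks candidate next steps, and exact match against the goal stands in for the theorem prover's accept signal; none of this is a real system's API.]

import heapq

# Toy rewrite system standing in for a proof calculus: a "proof" is a
# chain of rewrites from the starting expression to the goal.
RULES = [("A", "AB"), ("B", "BA"), ("AA", ""), ("BB", "")]

def successors(s):
    """All expressions reachable by applying one rule at one position."""
    out = []
    for pat, rep in RULES:
        i = s.find(pat)
        while i != -1:
            out.append(s[:i] + rep + s[i + len(pat):])
            i = s.find(pat, i + 1)
    return out

def proposal_score(state, goal):
    """Stand-in for a learned proposal model: lower means 'more promising'.
    A real system would query an LLM here instead of comparing lengths."""
    return abs(len(state) - len(goal))

def best_first_prove(start, goal, budget=10000):
    """Best-first search: the proposal score prunes the combinatorial
    explosion; matching the goal is the verifier's accept signal."""
    frontier = [(proposal_score(start, goal), start, [start])]
    seen = {start}
    while frontier and budget > 0:
        budget -= 1
        _, state, path = heapq.heappop(frontier)
        if state == goal:  # verifier accepts: "proof" complete
            return path
        for nxt in successors(state):
            if nxt not in seen and len(nxt) <= 2 * len(goal) + 4:
                seen.add(nxt)
                heapq.heappush(frontier,
                               (proposal_score(nxt, goal), nxt, path + [nxt]))
    return None

# Finds a short derivation, e.g. ['A', 'AB', 'ABA', 'ABBA'].
print(best_first_prove("A", "ABBA"))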
SPENCER: I know another idea you've had related to LLMs is how to fight misinformation. Do you want to just tell us about that briefly before we finish up?
ADAM: Well, this is another area where we're thinking about beneficial AI projects. There are some ideas coming to us at Convergent Research as FRO proposals, and some things we're just reading about online. One idea has to do with better recommender systems: an idea that Konrad Kording and some colleagues have proposed, called a System II recommender. You will know much more than I do about the validity (or lack thereof) of the System I and System II concepts in psychology, but colloquially, the idea is that a lot of recommender systems (think of Twitter's recommendation algorithm) are trying to bias you towards what you immediately want, what you'll immediately sink your attention into. System I, in this colloquial sense, is your base, immediate reactions, whereas System II is your higher-level planning and reasoning, your longer-term thinking. So, is there a way to make recommender systems that actually optimize for what you would want upon reflection, or what your self ten years from now would have wanted you to do now? Is there a way to make recommender systems that reflect longer-term planning, longer-term goals, higher values? Or recommender systems that are more customizable? Maybe you realize your recommendations are biased in some direction, and the system can show you knobs and say, "Hey, you're biased along this dimension. Why don't you choose to turn that knob down?" And then it makes a less biased version of your own interests or thinking. So maybe there's a machine learning and software engineering set of problems in making better recommender systems, where your System II, your higher self upon reflection and reasoning, has more control over what's being recommended to you.
SPENCER: So to give an example of that: would the idea be that Twitter, instead of showing you some clickbait or some political message that's designed to piss you off (because you're going to read it), would optimize for what you would have wanted yourself to see, if you were to look back over the month and ask, "What were the things I read that were really valuable?"
ADAM: Yeah, exactly. Imagine you literally did it that way. You could then ask how scalable that is, or how much the project would cost, and so on. But imagine I looked at all of my Twitter recommendations, went back through the things it recommended to me a week or a month ago, and said, "Well, this one was a total waste of time; it caused me to surf the internet for another three hours doing nothing. But this one was really great; this is what caused me to make a new friend, or to take some step in my career." I could annotate those, and then retrain the recommendation algorithm on my preferences, in retrospect, about those same choices a year or a month or a week later. That may not be the way you'd implement this project in practice, but it's a way of thinking about it. Maybe there really is a machine learning problem here with a trainable system, and it's just that Twitter doesn't do it, because they're optimizing for something different. And you could ask the FRO-type question: if I had a really serious engineering team and serious resources, and could make a lot of training data, but aimed at this different version of what a recommender system could be, could you do something different? And then there are various other ideas floating around. Aviv Ovadya has an idea called contextualization engines, which is a variation on the idea of search. Normally when I search, I'm trying to get the best result, and maybe the engine is also customizing which result I want to see. Is there a way to instead optimize a large-language-model-type system so that when I query it with something, it tells me all of the relevant context around that thing? Not just the result I want, but something with some of the properties a Wikipedia article has: a relatively unbiased, relatively comprehensive, global picture of the content, the context, and the different views on it, as opposed to just optimizing relevance for a particular user, or optimizing for clicks. Is there a way to optimize for providing multifaceted, useful context? That's at least my understanding of the idea. I think there's tremendous potential, if things are steered in the right directions, for AI to be epistemically positive for us, as opposed to something we're not controlling. This doesn't get into the larger safety or agency issues with AI, but just concretely, in terms of what we could use LLMs for, these would be positive for our epistemics and our thinking.
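[A minimal sketch of the retrospective-retraining idea, assuming scikit-learn is available. The features and the handful of "past items" are invented placeholders; the point is only that the training labels come from the user's judgment after the fact ("valuable on reflection") rather than from immediate clicks.]

from sklearn.linear_model import LogisticRegression

# Each past recommendation as made-up features:
# [outrage_score, novelty, topical_relevance, length_minutes]
past_items = [
    [0.9, 0.2, 0.3, 1.0],   # clickbait: clicked at the time...
    [0.8, 0.1, 0.2, 2.0],
    [0.1, 0.8, 0.9, 15.0],  # long, substantive piece
    [0.2, 0.7, 0.8, 12.0],
]
# Retrospective annotations: 1 = "glad I saw this", 0 = "waste of time".
reflective_labels = [0, 0, 1, 1]

model = LogisticRegression().fit(past_items, reflective_labels)

# Rank new candidates by predicted long-run value, not immediate clicks.
candidates = [[0.95, 0.1, 0.2, 1.5], [0.15, 0.9, 0.85, 10.0]]
for item, p in zip(candidates, model.predict_proba(candidates)[:, 1]):
    print(item, round(p, 2))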
SPENCER: It seems like the broader question is: how do you take some problem in society and put it into a machine learning context, where you have training data, and you can take a derivative, get a gradient, and actually improve in that direction? If we somehow had training feedback on what people actually valued a month later, we could throw machine learning at it. Or if we actually had feedback on what context would help someone understand what they're reading on Twitter, then again, we could apply machine learning there.
ADAM: Yeah. And I think, in principle, those are good things to do. The hunch about whether it's worth thinking about is: yes, it might be expensive to generate that training data, but look, we have really powerful LLMs now, so maybe it's just a question of fine-tuning, or maybe there's some really easy way to generate that data from the individual user. And that comes back to the FRO question: if you had a really serious engineering team organized around a public-benefit cause, how different would that look from what VCs and so on are mostly going after right now? Some of these ideas suggest to me that it could actually look pretty different. You could have engineering teams making very different things than the engineering teams of today, and it might actually be tractable to build these kinds of recommender systems. But I don't know if that's true.
SPENCER: Adam, thanks so much for coming on. This was a great conversation.
ADAM: Yeah. Thanks so much, I really enjoyed it.
[outro]
JOSH: A listener asks: What activities would you recommend that professors do on the first day of class?
SPENCER: So I've never really been a teacher. I've given lectures, but I've never taught for a whole semester, so take that with a grain of salt. But I think what I would want to do on the first day of class is really try to motivate the students. Why are they learning what they're learning? Why is this really important? Why am I, as a teacher, really excited to teach them this? How is this gonna affect their lives? And try to frame it in terms of what they already care about. Don't just impose my value system; take their value system: why should they care, and why would they be better off learning this? I think that could lead to a better course for the whole semester if they're really bought into this being something worth learning.